A Polynomial-Time Descent Method for Separable Convex Optimization Problems with Linear Constraints

Author

  • Sergei Chubanov
Abstract

We propose a polynomial algorithm for a separable convex optimization problem with linear constraints. We do not make any additional assumptions about the structure of the objective function except for polynomial computability. That is, the objective function can be non-differentiable. The running time of our algorithm is polynomial in the size of the input consisting of an instance of the problem and the accuracy with which the problem is to be solved. Our algorithm uses an oracle for solving auxiliary systems of linear inequalities. This oracle can be any polynomial algorithm for linear programming.
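The abstract above only outlines the problem class. As a rough illustration of how an LP oracle can drive descent on a separable, possibly nonsmooth convex objective over linear constraints, the Python sketch below applies the classical Kelley cutting-plane scheme to a toy absolute-value objective over a box. It is not Chubanov's algorithm; the problem data, the tolerance, and the use of SciPy's linprog as the LP oracle are illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

# Toy data (assumed): minimize f(x) = sum_i |x_i - c_i| over the box
# -3 <= x_i <= 3, written in the form A x <= b.
n = 3
c = np.array([1.0, -2.0, 0.5])
A = np.vstack([np.eye(n), -np.eye(n)])
b = np.concatenate([3.0 * np.ones(n), 3.0 * np.ones(n)])

def f(x):
    return np.abs(x - c).sum()          # separable, convex, nonsmooth

def subgrad(x):
    return np.sign(x - c)               # one valid subgradient

cuts_A, cuts_b = [], []
x_k = np.zeros(n)                       # feasible starting point
best = f(x_k)

for _ in range(100):
    g = subgrad(x_k)
    # cut: t >= f(x_k) + g.(x - x_k)  <=>  g.x - t <= g.x_k - f(x_k)
    cuts_A.append(np.append(g, -1.0))
    cuts_b.append(g @ x_k - f(x_k))

    A_ub = np.vstack([np.hstack([A, np.zeros((A.shape[0], 1))]),
                      np.array(cuts_A)])
    b_ub = np.concatenate([b, np.array(cuts_b)])
    obj = np.append(np.zeros(n), 1.0)   # minimize the model variable t
    bounds = [(None, None)] * n + [(-1e6, None)]   # crude lower bound on t

    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    x_k, t_k = res.x[:n], res.x[n]
    best = min(best, f(x_k))
    if best - t_k < 1e-8:               # cutting-plane model is tight enough
        break

print("approximate minimizer:", np.round(x_k, 4), "value:", round(best, 8))

Each iteration solves one auxiliary LP, which is the only role the oracle plays here; the gap between the best objective value seen and the LP model value serves as the stopping test.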


Similar articles

Large-scale randomized-coordinate descent methods with non-separable linear constraints

We develop randomized block coordinate descent (CD) methods for linearly constrained convex optimization. Unlike other large-scale CD methods, we do not assume the constraints to be separable, but allow them to be coupled linearly. To our knowledge, ours is the first CD method that allows linear coupling constraints, without making the global iteration complexity have an exponential dependence on ...
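As a small illustration of coordinate descent under a non-separable (coupling) linear constraint, the sketch below uses a standard pairwise trick: updating two coordinates by +delta and -delta keeps sum(x) = s exact. This is only a toy stand-in, not the randomized method proposed in this reference; the quadratic objective and data are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, s = 5, 10.0
y = rng.normal(size=n)                 # assumed objective: f(x) = 0.5 * ||x - y||^2
x = np.full(n, s / n)                  # feasible start: sum(x) = s

for _ in range(2000):
    i, j = rng.choice(n, size=2, replace=False)
    # exact line search for the +delta / -delta move on this coordinate pair
    delta = ((y[i] - x[i]) - (y[j] - x[j])) / 2.0
    x[i] += delta
    x[j] -= delta

# x approaches the projection of y onto {x : sum(x) = s},
# while every iterate satisfies the coupling constraint exactly
print(np.round(x, 4), "sum =", round(x.sum(), 6))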


A block coordinate gradient descent method for regularized convex separable optimization and covariance selection

We consider a class of unconstrained nonsmooth convex optimization problems, in which the objective function is the sum of a convex smooth function on an open subset of matrices and a separable convex function on a set of matrices. This problem includes the covariance selection estimation problem that can be expressed as an ℓ1-penalized maximum likelihood estimation problem. In this paper, we p...
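For intuition about coordinate-wise minimization of a smooth-plus-separable objective, the sketch below runs cyclic coordinate descent with soft-thresholding on a lasso problem, min_x 0.5*||A x - b||^2 + lam*||x||_1. The covariance selection problem of this reference is different, so the problem, data, and update rule here are purely illustrative.

import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(1)
m, n, lam = 30, 10, 0.5
A = rng.normal(size=(m, n))
x_true = np.zeros(n); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=m)

x = np.zeros(n)
col_sq = (A ** 2).sum(axis=0)
for _ in range(100):                       # cyclic sweeps over coordinates
    for j in range(n):
        r_j = b - A @ x + A[:, j] * x[j]   # residual excluding coordinate j
        x[j] = soft(A[:, j] @ r_j, lam) / col_sq[j]   # exact 1-D minimizer

print(np.round(x, 3))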


Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: application to distributed MPC

In this paper we propose a parallel coordinate descent algorithm for solving smooth convex optimization problems with separable constraints that may arise, e.g., in distributed model predictive control (MPC) for linear network systems. Our algorithm is based on block coordinate descent updates performed in parallel and has a very simple iteration. We prove a (sub)linear rate of convergence for the new algori...
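The sketch below shows why separable constraints make parallel updates simple: with a box constraint per coordinate, projection decomposes into independent clips, so all blocks can take a gradient step at once. It is a generic Jacobi-style projected-gradient sketch, not the algorithm of this reference; the quadratic objective, step size 1/L, and data are assumptions.

import numpy as np

rng = np.random.default_rng(2)
n = 8
M = rng.normal(size=(n, n))
Q = M @ M.T + n * np.eye(n)          # positive definite coupling matrix
c = rng.normal(size=n)
l, u = -np.ones(n), np.ones(n)       # separable box constraints

L = np.linalg.eigvalsh(Q).max()      # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = Q @ x - c                 # each block could compute its own slice
    x = np.clip(x - grad / L, l, u)  # separable projection = per-block clip

print(np.round(x, 4))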


Polynomial approximation method for stochastic programming

Two-stage stochastic programming is an important part of stochastic programming as a whole and is widely used in multiple disciplines, such as financial management, risk management, and logistics. Two-stage stochastic programming is a natural extension of linear programming obtained by incorporating unc...


A Convergent 3-Block SemiProximal Alternating Direction Method of Multipliers for Conic Programming with 4-Type Constraints

In this paper, we consider conic programming problems whose constraints consist of linear equalities, linear inequalities, a nonpolyhedral cone, and a polyhedral cone. A convenient way for solving this class of problems is to apply the directly extended alternating direction method of multipliers (ADMM) to its dual problem, which has been observed to perform well in numerical computations but m...
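For readers unfamiliar with ADMM-type splitting, the sketch below shows the classical two-block ADMM on a lasso-type toy problem. The 3-block semi-proximal scheme with conic constraints analyzed in this reference is substantially more involved, so this is only a generic illustration with assumed data and penalty parameter.

import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(3)
m, n, lam, rho = 40, 15, 0.5, 1.0
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
AtA_rhoI = A.T @ A + rho * np.eye(n)
Atb = A.T @ b
for _ in range(300):
    x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # smooth block
    z = soft(x + u, lam / rho)                          # nonsmooth block (prox)
    u = u + x - z                                       # multiplier update

print("primal residual ||x - z|| =", round(np.linalg.norm(x - z), 6))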



Journal:
  • SIAM Journal on Optimization

Volume: 26, Issue: -

Pages: -

Publication date: 2016